Explaining Deep Learning Models for Tabular Data Using Layer-Wise Relevance Propagation

Authors

Abstract

Trust and credibility in machine learning models are bolstered by the ability of a model to explain its decisions. While the explainability of deep learning models is a well-known challenge, a further challenge is the clarity of the explanation itself for the relevant stakeholders of the model. Layer-wise Relevance Propagation (LRP), an established explainability technique developed for deep models in computer vision, provides intuitive human-readable heat maps of input images. We present a novel application of LRP to tabular datasets containing mixed data (categorical and numerical) using a one-dimensional convolutional neural network (1D-CNN), for Credit Card Fraud detection and Telecom Customer Churn prediction use cases. We show how LRP is more effective for explainability than the traditional approaches of Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP). This effectiveness holds both locally, at the level of an individual sample, and holistically, over the whole testing set. We also discuss the significant computational time advantage of LRP (1–2 s) over LIME (22 s) and SHAP (108 s) on the same laptop, and thus its potential for real-time application scenarios. In addition, our validation has highlighted features for enhancing model performance, opening up a new area of research in using XAI as an approach to feature subset selection.
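As a rough illustration of the mechanics behind the heat maps described above, the following is a minimal NumPy sketch of the LRP-ε rule on a toy two-layer ReLU network. The layer sizes, weights, and the sample x are illustrative placeholders only; the paper itself applies LRP to a 1D-CNN over encoded tabular features, and no code accompanies this abstract.

import numpy as np

# Minimal sketch of the LRP-epsilon rule on a toy dense ReLU network.
# All shapes and weights are placeholders, not the paper's 1D-CNN.
rng = np.random.default_rng(0)

W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)  # 8 tabular features -> 16 hidden
W2, b2 = rng.normal(size=(16, 2)), np.zeros(2)   # 16 hidden -> 2 classes

x = rng.normal(size=8)  # one (already encoded) input sample

# Forward pass, keeping activations for the relevance backward pass.
a1 = np.maximum(0.0, x @ W1 + b1)
out = a1 @ W2 + b2

def lrp_eps(a_in, W, R_out, eps=1e-6):
    # LRP-epsilon: R_i = sum_j (a_i * w_ij / (z_j + eps*sign(z_j))) * R_j,
    # where z_j = sum_i a_i * w_ij are the (stabilised) pre-activations.
    z = a_in @ W
    z = z + eps * np.where(z >= 0.0, 1.0, -1.0)  # avoid division by ~0
    s = R_out / z                                # relevance per unit of z
    return a_in * (s @ W.T)                      # redistribute to inputs

# Seed relevance with the logit of the predicted class, then propagate back.
R_out = np.zeros_like(out)
k = int(np.argmax(out))
R_out[k] = out[k]

R_hidden = lrp_eps(a1, W2, R_out)    # class logit -> hidden units
R_input = lrp_eps(x, W1, R_hidden)   # hidden units -> input features

print("per-feature relevance:", np.round(R_input, 3))
print("conservation check:", np.isclose(R_input.sum(), out[k], atol=1e-3))

Each entry of R_input plays the role of one cell of the heat maps mentioned above: positive relevance marks features that pushed the network toward the predicted class, negative relevance marks features that spoke against it, and total relevance is approximately conserved across layers.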


Similar resources

Layer-wise Relevance Propagation for Deep Neural Network Architectures

We present the application of layer-wise relevance propagation to several deep neural networks such as the BVLC reference neural net and GoogLeNet trained on ImageNet and MIT Places datasets. Layer-wise relevance propagation is a method to compute scores for image pixels and image regions denoting the impact of the particular image region on the prediction of the classifier for one particular te...


Layer-wise learning of deep generative models

When using deep, multi-layered architectures to build generative models of data, it is difficult to train all layers at once. We propose a layer-wise training procedure admitting a performance guarantee compared to the global optimum. It is based on an optimistic proxy of future performance, the best latent marginal. We interpret autoencoders in this setting as generative models, by showing tha...



On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation

Understanding and interpreting classification decisions of automated image classification systems is of high value in many applications, as it allows one to verify the reasoning of the system and provides additional information to the human expert. Although machine learning methods are very successfully solving a plethora of tasks, they have in most cases the disadvantage of acting as a black box, ...


Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation

Complex nonlinear models such as deep neural networks (DNNs) have become an important tool for image classification, speech recognition, natural language processing, and many other fields of application. These models, however, lack transparency due to their complex nonlinear structure and to the complex data distributions to which they typically apply. As a result, it is difficult to fully charact...



Journal

Journal title: Applied Sciences

Year: 2021

ISSN: 2076-3417

DOI: https://doi.org/10.3390/app12010136